BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization

Authors

Abstract

The emergence of attention-based architectures has led to significant improvements in the performance of neural sequence-to-sequence models for text summarization. Although these models have proved to be effective in summarizing English-written documents, their portability to other languages is limited, thus leaving plenty of room for improvement. In this paper, we present BART-IT, a sequence-to-sequence model based on the BART architecture that is specifically tailored to the Italian language. The model is pre-trained on a large corpus of Italian-written pieces of text to learn language-specific features and is then fine-tuned on several benchmark datasets established for abstractive summarization. The experimental results show that BART-IT outperforms other state-of-the-art models in terms of ROUGE scores in spite of a significantly smaller number of parameters. The use of BART-IT can foster the development of interesting NLP applications for the Italian language. Beyond releasing the model to the research community to foster further research and applications, we also discuss the ethical implications behind the use of abstractive summarization models.
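As a rough illustration of how a BART-style summarization checkpoint such as the one described above could be applied, the following is a minimal sketch using the Hugging Face transformers library; the checkpoint identifier, input text, and generation parameters are placeholders and assumptions, not details taken from the paper.

# Minimal sketch (not the authors' released code): abstractive summarization
# with a BART-style seq2seq checkpoint via the Hugging Face transformers library.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "<bart-it-checkpoint>"  # placeholder: substitute the released BART-IT checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

article = "Testo di un articolo in italiano da riassumere."  # any Italian document

# Tokenize the input document, truncating it to the encoder's maximum length.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

# Generate an abstractive summary with beam search (illustrative decoding settings).
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=128,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))

Fine-tuning such a model on a benchmark summarization dataset would follow the usual seq2seq training loop (for example, with transformers' Seq2SeqTrainer), with ROUGE used as the evaluation metric as in the paper.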

Related articles

Sequence-to-Sequence RNNs for Text Summarization

In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al., 2014). Our experiments show that the proposed architecture significantly outperforms the state-of-the-art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also...

Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentenc...

Improving Semantic Relevance for Sequence-to-Sequence Learning of Chinese Social Media Text Summarization

Current Chinese social media text summarization models are based on an encoder-decoder framework. Although its generated summaries are similar to source texts literally, they have low semantic relevance. In this work, our goal is to improve semantic relevance between source texts and summaries for Chinese social media summarization. We introduce a Semantic Relevance Based neural model to encoura...

Towards Efficient Model for Automatic Text Summarization

Automatic text summarization aims at producing a summary from a document or a set of documents. It has become a widely explored area of research, driven by the need for immediate access to relevant and precise information that can effectively represent a huge amount of information. Because relevant information is scattered across a given document, every user is faced with the problem of going through a lar...

Introducing the Sequence Model for Text Retrieval

We propose and explore a novel approach, called the sequence model, to text retrieval. The model differs from classical ones in the extent of how positional information of term occurrences is used for relevance judgment. In the sequence model, documents and queries are viewed as sequences of term-position pairs and the relevance of a document to a query is judged by the similarity between their...

Journal

Journal: Future Internet

Year: 2022

ISSN: 1999-5903

DOI: https://doi.org/10.3390/fi15010015